
    Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

    Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy significantly improves memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2 GB vs. 11.2 GB). For training a 4-layer GCN on this data, our algorithm finishes in around 36 minutes, whereas all existing GCN training algorithms fail due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at https://github.com/google-research/google-research/tree/master/cluster_gcn.
    Comment: In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD'19)
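    For intuition, here is a minimal sketch of the training scheme the abstract describes (cluster the graph once, then run SGD on one dense block at a time). It is not the authors' released implementation: the random partition_graph stand-in, the dense adjacency tensor, and the model(feats, adj) interface are all simplifying assumptions, and the paper uses a real graph clustering algorithm such as METIS rather than random chunks.

```python
import random
import torch

def partition_graph(num_nodes, num_parts):
    # Stand-in partitioner: random equal-size chunks. Cluster-GCN instead uses
    # a real graph clustering algorithm (e.g. METIS) so each block is dense.
    perm = torch.randperm(num_nodes)
    return list(torch.chunk(perm, num_parts))

def train_cluster_gcn(adj, feats, labels, model, optimizer,
                      num_parts=20, epochs=10):
    # adj: dense (N, N) adjacency tensor; feats: (N, F) features; labels: (N,).
    parts = partition_graph(feats.shape[0], num_parts)  # cluster once, up front
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        random.shuffle(parts)                 # visit clusters in random order
        for nodes in parts:
            # Restrict the neighborhood search to the sampled block: keep only
            # edges whose endpoints both lie inside the cluster.
            sub_adj = adj[nodes][:, nodes]
            out = model(feats[nodes], sub_adj)  # forward pass on the block only
            loss = loss_fn(out, labels[nodes])
            optimizer.zero_grad()
            loss.backward()  # memory scales with the block, not the full graph
            optimizer.step()
```

    Because each gradient step touches only one cluster's nodes and edges, the memory footprint is bounded by the largest block rather than by the full graph, which is what enables the deeper and larger models reported above.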

    Identifiability of the Simplex Volume Minimization Criterion for Blind Hyperspectral Unmixing: The No Pure-Pixel Case

    In blind hyperspectral unmixing (HU), the pure-pixel assumption is well known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially in scenarios where pixels are heavily mixed. In the no-pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it has not been entirely clear from a theoretical viewpoint why this is true. This paper aims to address that issue. We develop an analysis framework in which the perfect endmember identifiability of MVES is studied in the noiseless case. We prove that MVES is indeed robust against the lack of pure pixels, as long as the pixels are not too heavily mixed or too asymmetrically spread. The theoretical results are verified by numerical simulations.
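    For reference, here is a schematic statement of the MVES criterion under the standard noiseless linear mixing model; the notation below is illustrative and not necessarily the paper's.

```latex
% Noiseless linear mixing model: each of the L pixels x_n is a convex
% combination of N endmember signatures a_1, ..., a_N (the columns of A):
%   x_n = A s_n,   s_n \ge 0,   \mathbf{1}^T s_n = 1.
% The MVES criterion seeks the minimum-volume simplex enclosing all pixels:
\begin{equation*}
  \min_{b_1,\dots,b_N}\ \operatorname{vol}\!\left(\operatorname{conv}\{b_1,\dots,b_N\}\right)
  \quad \text{s.t.} \quad x_n \in \operatorname{conv}\{b_1,\dots,b_N\},\quad n = 1,\dots,L.
\end{equation*}
% Perfect identifiability means the optimal vertices b_i coincide (up to
% permutation) with the true endmembers a_i.
```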

    Signals of New Gauge Bosons in Gauged Two Higgs Doublet Model

    Recently, a gauged two-Higgs-doublet model, in which the two Higgs doublets are embedded into the fundamental representation of an extra local $SU(2)_H$ group, was constructed. Both of the new gauge bosons $Z'$ and $W'^{(p,m)}$ are electrically neutral. While $Z'$ can be singly produced at colliders, $W'^{(p,m)}$, which is heavier, must be pair produced. We explore the constraints on $Z'$ using current Drell-Yan-type data from the Large Hadron Collider. Anticipating optimistically that $Z'$ can be discovered via the clean Drell-Yan-type signals at the high-luminosity upgrade of the collider, we explore the detectability of the extra heavy fermions in the model via two-lepton/jet plus missing-transverse-energy signals from the exotic decay modes of $Z'$. For $W'^{(p,m)}$ pair production at a future 100 TeV proton-proton collider, we demonstrate that certain kinematical distributions of the two/four-lepton plus missing-energy signals have features distinguishable from the Standard Model background. In addition, we present comparisons of these kinematical distributions between the gauged two-Higgs-doublet model and the littlest Higgs model with T-parity, the latter of which can give rise to the same signals with competitive, if not larger, cross sections.
    Comment: 39 pages, 23 figures, 7 tables, and two new appendices; to appear in EPJ

    An SoC-Based System for Real-time Contactless Measurement of Human Vital Signs and Soft Biometrics

    Computer vision (CV) plays a major role in modern life. Advances in CV technology make it possible to sense human vital signs and soft biometric parameters in a contactless way. In this work, we design and implement a contactless measurement system for human vital signs, including pulse rate (PR) and respiration rate (RR), and for the assessment of human soft biometric parameters, i.e., age, gender, skin color type, and body height. Our system is based on a system-on-chip (SoC) device that runs both an FPGA and a hard processor, providing real-time operation in a small form factor. Experimental results show that, compared to clinical apparatus, our device achieves a mean absolute error (MAE) of 2.85 bpm for PR and 1.46 bpm for RR. For soft biometric parameters, we obtained unsatisfactory results on age and gender estimation, with accuracies of 58% and 74%, respectively; however, we reached high accuracy on skin color type (98%) and a low error on body height (2.28 cm).
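    The abstract does not detail the CV pipeline, but a common contactless route to PR and RR is to track a skin-region intensity trace over time and read off its dominant frequency (remote photoplethysmography). The sketch below illustrates only that generic idea; the function name, frequency bands, and synthetic trace are assumptions for illustration, not the authors' design.

```python
import numpy as np

def dominant_rate_bpm(signal, fps, lo_hz, hi_hz):
    # Estimate a periodic rate (beats/breaths per minute) as the strongest
    # spectral peak of a detrended intensity trace within a plausible band.
    x = signal - np.mean(signal)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Example: `trace` stands in for the mean green-channel intensity of a face
# ROI over 20 s of 30 fps video; here it is a synthetic ~72 bpm signal, so
# only the PR band is meaningful for this toy trace.
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
pr = dominant_rate_bpm(trace, fps, lo_hz=0.7, hi_hz=4.0)   # pulse: 42-240 bpm
rr = dominant_rate_bpm(trace, fps, lo_hz=0.1, hi_hz=0.5)   # respiration: 6-30 bpm
print(f"PR ~ {pr:.1f} bpm, RR ~ {rr:.1f} bpm")
```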